
    Computational inference and control of quality in multimedia services

    Quality is the degree of excellence we expect of a service or a product. It is also one of the key factors that determine its value. For multimedia services, understanding the experienced quality means understanding how the delivered fidelity, precision and reliability correspond to the users' expectations. Yet the quality of multimedia services is inextricably linked to the underlying technology. It is developments in video recording, compression and transport, as well as display technologies, that enable high-quality multimedia services to become ubiquitous. The constant evolution of these technologies delivers a steady increase in performance, but also a growing level of complexity. As new technologies stack on top of each other, the interactions between them and their components become more intricate and obscure. In this environment, optimizing the delivered quality of multimedia services becomes increasingly challenging. The factors that affect the experienced quality, or Quality of Experience (QoE), tend to have complex non-linear relationships. The subjectively perceived QoE is hard to measure directly and continuously evolves with the user's expectations. Faced with the difficulty of designing an expert system for QoE management that relies on painstaking measurements and intricate heuristics, we turn to an approach based on learning or inference. The solutions presented in this work rely on computational intelligence techniques that perform inference over the large set of signals coming from the system to deliver QoE models based on user feedback. We furthermore present solutions for inferring optimized control in systems with no guarantees of resource availability. This approach offers the opportunity to be more accurate in assessing the perceived quality, to incorporate more factors and to adapt as technology and user expectations evolve. In a similar fashion, the inferred control strategies can uncover more intricate patterns coming from the sensors and therefore implement farther-reaching decisions. As in natural systems, this continuous adaptation and learning makes these systems more robust to perturbations in the environment, keeps them accurate for longer and makes them more efficient in dealing with increased complexity. Overcoming this increasing complexity and diversity is crucial for addressing the challenges of future multimedia systems. Through experiments and simulations, this work demonstrates that adopting a learning-based approach can improve subjective and objective QoE estimation, and enable efficient and scalable QoE management as well as efficient control mechanisms.
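
    As a concrete illustration of the learning-based approach described above, the sketch below fits a regressor that maps system-level signals to user feedback. The feature set, the synthetic ratings and the use of scikit-learn are illustrative assumptions, not the implementation from this work.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500

# Hypothetical system-level signals: bit-rate (Mbps), packet loss (%) and the
# fraction of playout time spent rebuffering.
X = np.column_stack([
    rng.uniform(0.5, 8.0, n),
    rng.uniform(0.0, 5.0, n),
    rng.uniform(0.0, 0.3, n),
])

# Synthetic user feedback (mean opinion scores on a 1-5 scale) with a
# non-linear dependence on the signals, standing in for real ratings.
mos = np.clip(1.0 + 4.0 * (X[:, 0] / 8.0) - 0.4 * X[:, 1] - 6.0 * X[:, 2]
              + rng.normal(0.0, 0.2, n), 1.0, 5.0)

# The QoE model: inference from system signals to perceived quality.
model = RandomForestRegressor(n_estimators=200, random_state=0)
print("cross-validated R^2:", cross_val_score(model, X, mos, cv=5).mean())
```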

    QoE for Mobile Streaming

    No abstract

    Adaptive testing for video quality assessment

    Optimizing the Quality of Experience and avoiding under- or over-provisioning in video delivery services requires an understanding of how different resources affect the perceived quality. The utility of a resource, such as bit-rate, is calculated directly by proportioning the improvement in quality to the increase in cost. However, perception of quality in video is subjective and, hence, difficult and costly to estimate directly with the commonly used rating methods. Two-alternative forced-choice methods such as Maximum Likelihood Difference Scaling (MLDS) introduce fewer biases and less variability, but only deliver estimates of relative differences in quality rather than absolute ratings. Nevertheless, this information is sufficient for calculating the utility of the resource for the video quality. In this work, we present an adaptive MLDS method, which incorporates an active test selection scheme that improves the convergence rate and decreases the need for executing the full range of tests.
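
    To make the MLDS idea concrete, the sketch below fits a perceptual difference scale from simulated two-alternative forced-choice responses by maximum likelihood. The simulated observer, the fixed noise level and the random (non-adaptive) choice of comparisons are assumptions for illustration; the paper's contribution, the active selection of the next test, is not shown.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
n_stim = 6                                            # e.g. six bit-rate levels
true_scale = np.sqrt(np.linspace(0.0, 1.0, n_stim))   # hidden perceptual scale

# Simulate forced-choice responses to ordered quadruples i<j<k<l: which pair,
# (i, j) or (k, l), differs more in perceived quality?
quads = np.array([sorted(rng.choice(n_stim, 4, replace=False)) for _ in range(300)])
i, j, k, l = quads.T
diff = (true_scale[j] - true_scale[i]) - (true_scale[l] - true_scale[k])
resp = (diff + rng.normal(0.0, 0.1, len(diff)) > 0).astype(int)  # 1 = first pair

def neg_log_lik(psi_free, sigma=0.1):
    # Scale values for the stimuli, anchored at psi_0 = 0; sigma fixed for simplicity.
    psi = np.concatenate(([0.0], psi_free))
    d = (psi[j] - psi[i]) - (psi[l] - psi[k])
    p = norm.cdf(np.where(resp == 1, d, -d) / sigma)
    return -np.sum(np.log(np.clip(p, 1e-9, 1.0)))

fit = minimize(neg_log_lik, x0=np.linspace(0.1, 1.0, n_stim - 1), method="Nelder-Mead")
print("estimated scale:", np.round(np.concatenate(([0.0], fit.x)), 2))
```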

    Comparison of neural closure models for discretised PDEs

    Neural closure models have recently been proposed as a method for efficiently approximating small scales in multiscale systems with neural networks. The choice of loss function and associated training procedure has a large effect on the accuracy and stability of the resulting neural closure model. In this work, we systematically compare three distinct procedures: “derivative fitting”, “trajectory fitting” with discretise-then-optimise, and “trajectory fitting” with optimise-then-discretise. Derivative fitting is conceptually the simplest and computationally the most efficient approach and is found to perform reasonably well on one of the test problems (Kuramoto-Sivashinsky) but poorly on the other (Burgers). Trajectory fitting is computationally more expensive but is more robust and is therefore the preferred approach. Of the two trajectory fitting procedures, the discretise-then-optimise approach produces more accurate models than the optimise-then-discretise approach. While the optimise-then-discretise approach can still produce accurate models, care must be taken in choosing the length of the trajectories used for training, in order to train the models on long-term behaviour while still producing reasonably accurate gradients during training. Two existing theorems are interpreted in a novel way that gives insight into the long-term accuracy of a neural closure model based on how accurate it is in the short term.
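
    The sketch below illustrates, in PyTorch, the difference between the derivative-fitting loss and the discretise-then-optimise trajectory-fitting loss for a generic coarse model du/dt = f(u) + NN(u). The right-hand side, the network size, the forward-Euler integrator and the random reference data are placeholder assumptions, not the paper's setup.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
N, dt, steps = 32, 1e-2, 20

def f_coarse(u):
    # Placeholder for the known coarse-grained right-hand side.
    return -u

# Small network standing in for the neural closure model.
closure = nn.Sequential(nn.Linear(N, 64), nn.Tanh(), nn.Linear(64, N))

# Reference snapshots u_ref[t] and their time derivatives (in practice obtained
# by filtering a fine-grid solution); random tensors are stand-ins here.
u_ref = torch.randn(steps + 1, N)
dudt_ref = torch.randn(steps + 1, N)

# (1) Derivative fitting: match the right-hand side pointwise, no time integration.
loss_deriv = ((f_coarse(u_ref) + closure(u_ref) - dudt_ref) ** 2).mean()

# (2) Trajectory fitting, discretise-then-optimise: roll the discrete solver
# forward with the closure inside and backpropagate through every step.
u = u_ref[0]
loss_traj = torch.tensor(0.0)
for t in range(steps):
    u = u + dt * (f_coarse(u) + closure(u))       # forward Euler step
    loss_traj = loss_traj + ((u - u_ref[t + 1]) ** 2).mean()

(loss_deriv + loss_traj).backward()               # gradients reach the closure net
print(float(loss_deriv), float(loss_traj))
```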

    Comparative study of deep learning methods for one-shot image classification (abstract)

    Training deep learning models for image classification requires large amounts of labeled data to overcome the challenges of overfitting and underfitting. In many practical applications, such labeled data are not available. In an attempt to solve this problem, the one-shot learning paradigm tries to create machine learning models capable of learning well from one or, at most, a few labeled examples per class. To better understand the behavior of various deep learning models and approaches for one-shot learning, in this abstract we perform a comparative study of the most used ones on a challenging real-world dataset, i.e. Fashion-MNIST.
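
    For context, such models are typically compared with N-way one-shot episodes; the sketch below shows one episode with a nearest-neighbour rule in an embedding space. The random projection standing in for a trained encoder and the random "images" are purely illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_way, emb_dim = 5, 64

# Random projection standing in for a trained encoder (e.g. a Siamese network);
# with a real trained encoder the episode accuracy would be far above chance.
W = rng.standard_normal((784, emb_dim))

def embed(x):
    return x @ W

# One labelled 28x28 support "image" per class and 20 query "images"
# (random stand-ins for real Fashion-MNIST samples).
support = rng.standard_normal((n_way, 784))
queries = rng.standard_normal((20, 784))
query_labels = rng.integers(0, n_way, 20)

# Classify each query by the label of the nearest support embedding.
dists = np.linalg.norm(embed(queries)[:, None, :] - embed(support)[None, :, :], axis=-1)
pred = dists.argmin(axis=1)
print("5-way one-shot episode accuracy:", (pred == query_labels).mean())
```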

    Automated image segmentation of 3D printed fibrous composite micro-structures using a neural network

    A new, automated image segmentation method is presented that effectively identifies the micro-structural objects (fibre, air void, matrix) of 3D printed fibre-reinforced materials using a deep convolutional neural network. The method creates training data from a physical specimen composed of a single, straight fibre embedded in a cementitious matrix with air voids. The specific micro-structure of this strain-hardening cementitious composite (SHCC) is obtained from X-ray micro-computed tomography scanning, after which the 3D ground truth mask of the sample is constructed by connecting each voxel of a scanned image to the corresponding micro-structural object. The neural network is trained to identify fibres oriented in arbitrary directions through the application of a data augmentation procedure, which eliminates the time-consuming task of a human expert manually annotating these data. The predictive capability of the methodology is demonstrated via the analysis of a practical SHCC developed for 3D concrete printing, showing that the automated segmentation method is capable of adequately identifying complex micro-structures with arbitrarily distributed and oriented fibres. Although the focus of the current study is on SHCC materials, the proposed methodology can also be applied to other fibre-reinforced materials, such as fibre-reinforced plastics. The micro-structures identified by the image segmentation method may serve as input for dedicated finite element models that allow for computing their mechanical behaviour as a function of the micro-structural composition.
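
    A minimal sketch of the rotation-based augmentation idea described above: the scanned volume and its ground-truth mask are rotated together so the network sees fibres in arbitrary orientations. The volume size, angles and interpolation settings are assumptions, not the settings used in the paper.

```python
import numpy as np
from scipy.ndimage import rotate

rng = np.random.default_rng(0)

# Stand-ins for a micro-CT volume and its voxel-wise ground-truth mask.
volume = rng.random((64, 64, 64)).astype(np.float32)
mask = (volume > 0.995).astype(np.uint8)

def augment(vol, msk):
    # Rotate the scan and its labels by the same random angle about each axis
    # pair; linear interpolation for the scan, nearest-neighbour for the labels.
    for axes in [(0, 1), (0, 2), (1, 2)]:
        angle = rng.uniform(0.0, 360.0)
        vol = rotate(vol, angle, axes=axes, reshape=False, order=1, mode="nearest")
        msk = rotate(msk, angle, axes=axes, reshape=False, order=0, mode="nearest")
    return vol, msk

aug_volume, aug_mask = augment(volume, mask)
print(aug_volume.shape, aug_mask.shape)   # both stay (64, 64, 64)
```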

    A regression method for real-time video quality evaluation

    No-Reference (NR) metrics provide a mechanism to assess video quality in ever-growing wireless networks. Their low computational complexity and functional characteristics make them the primary choice for real-time content management and mobile streaming control. Unfortunately, common NR metrics suffer from poor accuracy, particularly in network-impaired video streams. In this work, we introduce a regression-based video quality metric that is simple enough for real-time computation on thin clients, and comparably accurate to state-of-the-art Full-Reference (FR) metrics, which are functionally and computationally unviable in real-time streaming. We benchmark our metric against the FR metric VQM (Video Quality Metric), finding a very strong correlation.
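
    A minimal sketch of a regression-based NR metric of this general kind: cheap per-clip features are mapped to a full-reference quality score by a learned regressor. The feature set, the synthetic data and the use of VQM scores as the regression target are assumptions for illustration, not the paper's exact pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 400

# Hypothetical per-clip NR features: blur, blockiness, temporal activity and
# received bit-rate (all normalised to [0, 1] here).
X = rng.random((n, 4))

# Stand-in for VQM scores of the same clips, used as the regression target
# (in VQM, lower means better quality).
vqm = 0.2 + 0.5 * X[:, 1] + 0.3 * X[:, 0] - 0.1 * X[:, 3] + rng.normal(0.0, 0.05, n)

model = Ridge(alpha=1.0)
print("cross-validated R^2 vs. VQM:", cross_val_score(model, X, vqm, cv=5).mean())
```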

    You Can Have Better Graph Neural Networks by Not Training Weights at All: Finding Untrained GNNs Tickets

    Recent works have impressively demonstrated that there exists a subnetwork in randomly initialized convolutional neural networks (CNNs) that can match the performance of fully trained dense networks at initialization, without any optimization of the network's weights (i.e., untrained networks). However, the presence of such untrained subnetworks in graph neural networks (GNNs) still remains mysterious. In this paper, we carry out the first-of-its-kind exploration of discovering matching untrained GNNs. With sparsity as the core tool, we can find untrained sparse subnetworks at initialization that match the performance of fully trained dense GNNs. Besides this already encouraging finding of comparable performance, we show that the found untrained subnetworks can substantially mitigate the GNN over-smoothing problem, hence becoming a powerful tool to enable deeper GNNs without bells and whistles. We also observe that such sparse untrained subnetworks have appealing performance in out-of-distribution detection and robustness to input perturbations. We evaluate our method across widely-used GNN architectures on various popular datasets, including the Open Graph Benchmark (OGB).
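
    The sketch below shows the core mechanism typically used to find such untrained subnetworks: the weights stay at their random initialisation while a score per weight is trained, and only the top-scoring weights are kept via a straight-through-style estimator. The generic linear layer below is an assumption; the paper applies the idea inside GNN layers.

```python
import torch
import torch.nn as nn

class MaskedLinear(nn.Module):
    """Linear layer whose random weights are frozen; only per-weight scores
    are trained, and the top-scoring fraction of weights is kept."""

    def __init__(self, d_in, d_out, sparsity=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(d_out, d_in), requires_grad=False)
        self.scores = nn.Parameter(torch.randn(d_out, d_in))  # the only trained tensor
        self.k = int((1.0 - sparsity) * d_in * d_out)          # number of weights to keep

    def forward(self, x):
        threshold = self.scores.abs().flatten().topk(self.k).values.min()
        hard_mask = (self.scores.abs() >= threshold).float()
        # Straight-through-style trick: the forward pass uses the hard mask,
        # while gradients still reach the scores.
        mask = hard_mask + self.scores - self.scores.detach()
        return x @ (self.weight * mask).t()

layer = MaskedLinear(16, 8)
out = layer(torch.randn(4, 16))
out.sum().backward()
print(out.shape, layer.weight.grad is None, layer.scores.grad is not None)
```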

    Overview of the TCV tokamak experimental programme

    The tokamak à configuration variable (TCV) continues to leverage its unique shaping capabilities, flexible heating systems and modern control system to address critical issues in preparation for ITER and a fusion power plant. For the 2019-20 campaign its configurational flexibility has been enhanced with the installation of removable divertor gas baffles, its diagnostic capabilities with an extensive set of upgrades and its heating systems with new dual-frequency gyrotrons. The gas baffles reduce coupling between the divertor and the main chamber and allow for detailed investigations of the role of fuelling in general and, together with upgraded boundary diagnostics, for testing divertor and edge models in particular. The increased heating capabilities broaden the operational regime to include T_e/T_i ∼ 1 and have stimulated refocussing studies from L-mode to H-mode across a range of research topics. ITER baseline parameters were reached in type-I ELMy H-modes, and alternative regimes with 'small' (or no) ELMs were explored. Most prominently, negative triangularity was investigated in detail and confirmed as an attractive scenario with H-mode-level core confinement but an L-mode edge. Emphasis was also placed on control, where an increased number of observers, actuators and control solutions became available and are now integrated into a generic control framework, as will be needed in future devices. The quantity and quality of results of the 2019-20 TCV campaign are a testament to its successful integration within the European research effort, alongside a vibrant domestic programme and international collaborations.